Higher-Order Factorization Machines

Blondel, Mathieu, Fujino, Akinori, Ueda, Naonori, Ishihata, Masakazu

Neural Information Processing Systems

Factorization machines (FMs) are a supervised learning approach that can use second-order feature combinations even when the data is very high-dimensional. Unfortunately, despite increasing interest in FMs, there exists to date no efficient training algorithm for higher-order FMs (HOFMs). In this paper, we present the first generic yet efficient algorithms for training arbitrary-order HOFMs. We also present new variants of HOFMs with shared parameters, which greatly reduce model size and prediction times while maintaining similar accuracy. We demonstrate the proposed approaches on four different link prediction tasks.
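For context (an editorial gloss, not part of the abstract): in the notation commonly used for these models, the second-order FM prediction and the degree-m ANOVA kernel that HOFMs build on are typically written as

  \hat{y}_{\mathrm{FM}}(x) = w_0 + \langle w, x \rangle + \sum_{j < j'} \langle p_j, p_{j'} \rangle \, x_j x_{j'}

  \mathcal{A}^m(p, x) = \sum_{j_1 < \cdots < j_m} p_{j_1} \cdots p_{j_m} \, x_{j_1} \cdots x_{j_m}

The second-order term couples every pair of features through a low-rank inner product, which is what keeps FMs tractable in very high dimensions; an order-m HOFM adds one ANOVA term per degree from 2 up to m, each with its own parameter vectors, which is what "arbitrary-order" refers to above.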


Reviews: Higher-Order Factorization Machines

Neural Information Processing Systems

It is an interesting, well-reasoned and promising approach, but there are a few issues which I would like to see clarified in the rebuttal before accepting the paper. The idea of the paper seems to rely strongly on "Polynomial Networks and Factorization Machines: New Insights and Efficient Training Algorithms" by Blondel et al., where ANOVA kernels have already been used. Can you explain in more detail the differences and contributions compared to that paper? I am wondering why the approaches developed there cannot be applied to the given problem, or why it would not be better to adapt them to HOFMs.
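For readers unfamiliar with the ANOVA kernel the review refers to: the degree-m kernel sums the products p_{j_1} x_{j_1} \cdots p_{j_m} x_{j_m} over all C(d, m) feature subsets, yet it can be evaluated in O(dm) time with a short dynamic program. Below is a minimal sketch of that recursion in Python; the function names and the brute-force check are illustrative, not the authors' code.

import numpy as np
from itertools import combinations

def anova_kernel(p, x, m):
    """Degree-m ANOVA kernel A^m(p, x), computed in O(d*m) time.

    Uses the recursion a[t, j] = a[t, j-1] + p[j-1]*x[j-1]*a[t-1, j-1]:
    feature j is either excluded (first term) or included, multiplying
    the degree-(t-1) kernel over the first j-1 features (second term).
    """
    d = len(x)
    a = np.zeros((m + 1, d + 1))
    a[0, :] = 1.0  # degree-0 kernel is the empty product, i.e. 1
    for t in range(1, m + 1):
        for j in range(1, d + 1):
            a[t, j] = a[t, j - 1] + p[j - 1] * x[j - 1] * a[t - 1, j - 1]
    return a[m, d]

def anova_kernel_bruteforce(p, x, m):
    """Same quantity by enumerating all C(d, m) subsets (for checking)."""
    return sum(np.prod([p[j] * x[j] for j in S])
               for S in combinations(range(len(x)), m))

rng = np.random.default_rng(0)
p, x = rng.normal(size=8), rng.normal(size=8)
assert np.isclose(anova_kernel(p, x, 3), anova_kernel_bruteforce(p, x, 3))

This is the quantity a HOFM evaluates once per parameter vector and degree; the paper's contribution is, among other things, making training with such kernels efficient at arbitrary order.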

